perm filename RELDEF.MEN[F75,JMC] blob sn#194392 filedate 1975-12-26 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00002 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00002 00002	DEFINITIONS RELATIVE TO AN APPROXIMATE THEORY
C00007 ENDMK
C⊗;
DEFINITIONS RELATIVE TO AN APPROXIMATE THEORY

	Since this idea seems difficult, and since I don't think the
idea is in its final form, I shall start with an example.  The example
is the counterfactual conditional statement by one ski instructor
to another, %2"If he had bent his knees properly when making that
turn he wouldn't have fallen,"%1 and the other instructor's reply, %2"No,
his trouble was that he didn't put his weight on his downhill ski."%1

	This example is chosen because it uses counterfactuals realistically,
in that the correct subsequent instruction of the student
may depend on which counterfactual was true.
For artificial intelligence purposes, the point is that
we will have to program computers to make and
use counterfactuals.

	Our idea is that these counterfactual sentences are not assertions
about the world directly.  Instead they are assertions within a certain
approximate theory of the world.   
In this theory, the will of the skier, his body, and the external
world are regarded as a system of interacting automata.  Within the
theory, the question of what would have happened had the skier bent
his knees makes sense.  It is a question about a related automaton
system in which the output from the skier's mind to his body is
replaced by an input from the outside, and the question about what
would have happened had the skier behaved differently is a question
about that automaton system.
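The related automaton system can be sketched in code.  This is only an
illustrative toy, not anything from the paper: the functions %2mind%1,
%2body%1, and %2world%1 and their state encodings are all hypothetical.
The counterfactual is evaluated by severing the mind-to-body connection
and substituting an input supplied from outside.

```python
# Toy sketch of the skier as a system of interacting automata.
# All names and state encodings here are invented for illustration.

def mind(mind_state, percept):
    """The skier's will: in this run he decides not to bend his knees."""
    return mind_state, "knees_straight"

def body(body_state, command):
    """The body automaton carries out whatever command it receives."""
    return command  # posture simply tracks the command

def world(world_state, posture):
    """The external world: a straight-kneed skier falls on a sharp turn."""
    if world_state == "sharp_turn" and posture == "knees_straight":
        return "fallen"
    return "upright"

def run_actual():
    """The coupled system as it actually ran: mind -> body -> world."""
    _, command = mind("willful", percept="sharp_turn")
    posture = body("balanced", command)
    return world("sharp_turn", posture)

def run_counterfactual(forced_command):
    """The related automaton system: the output from the skier's mind
    to his body is replaced by an input from the outside."""
    posture = body("balanced", forced_command)
    return world("sharp_turn", posture)

print(run_actual())                      # fallen
print(run_counterfactual("knees_bent"))  # upright
```

Within this toy theory the counterfactual has a definite truth value:
the modified system with the knees-bent input does not fall, while the
unmodified system does.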

	The theory within which the counterfactual sentences make
sense is quite elaborate.  It involves grouping certain functions
of the world-state into configurations called the state-of-mind
of the skier, his state-of-body, etc.  It is supported by the
fact that the future state of these functions can be predicted
from their present state, but the fact that the ski instructors
have this theory and not some other is a function of their whole
life experiences.  The theory is presumably a good approximation.

	Our point is that the counterfactual "If he had bent his
knees he wouldn't have fallen" has no meaning as a sentence about
the world per se, i.e. its meaning cannot be defined in terms of
what is observable at that time.  However, it may have a definite
truth value in terms of the elaborate theory of human action jointly
held by the ski instructors.  On the other hand, such theories
are often not completely autonomous; the future state is not
uniquely determined by the present, so it is not guaranteed that
the theory of skiing can give a definite answer to the question.

	Among mental phenomena, it would seem that any belief
that one %2can%1 do something but perhaps won't is based on
approximations of the world by systems of interacting automata.
An explication of %2can%1 in such terms is elaborated in (McCarthy
and Hayes 1970), but it could be done better with a less restricted
notion of a system of interacting automata, and the notion of
approximation is not explored in that paper.
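On this reading, %2can%1 also becomes computable within the automaton
approximation: the skier %2can%1 avoid falling just in case some input
to his body sub-automaton leads to a non-falling outcome, whether or not
his mind actually emits that input.  The sketch below is hypothetical
throughout; it only illustrates the existential-quantifier reading.

```python
# Hypothetical sketch of "can" relative to an automaton approximation:
# an agent *can* achieve an outcome iff some available input to its
# sub-automaton yields that outcome.  All names here are invented.

def outcome(posture):
    # toy world automaton: a sharp turn with straight knees means a fall
    return "fallen" if posture == "knees_straight" else "upright"

def can_avoid_fall(available_commands):
    """True iff some externally substitutable command avoids the fall."""
    return any(outcome(cmd) != "fallen" for cmd in available_commands)

print(can_avoid_fall(["knees_straight", "knees_bent"]))  # True
print(can_avoid_fall(["knees_straight"]))                # False
```

The belief that one %2can%1 do something but perhaps won't is then a
statement about the range of inputs the approximating theory allows,
not about what the coupled system will in fact do.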